This paper describes our two-stage system for the Euphemism Detection shared task hosted by the 3rd Workshop on Figurative Language Processing in conjunction with EMNLP 2022. Euphemisms tone down expressions about sensitive or unpleasant issues such as addiction and death. The ambiguous nature of euphemistic words or expressions makes it challenging to detect their actual meaning within a context. In the first stage, we seek to mitigate this ambiguity by incorporating literal descriptions into the input text prompts of our baseline model. This kind of direct supervision yields a remarkable performance improvement. In the second stage, we integrate visual supervision into our system using visual imagery: two sets of images generated by a text-to-image model from the terms and their descriptions, respectively. Our experiments demonstrate that visual supervision also gives a statistically significant performance boost. Our system achieved second place with an F1 score of 87.2%, only about 0.9% below the best submission.
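As a hedged illustration of the first-stage idea (folding literal descriptions into the input prompt), the Python sketch below prepends a glossary description of the candidate term to the sentence before classification. The glossary, prompt template, and model name are assumptions for illustration only, not the submitted system's exact setup.

```python
# Sketch of the first-stage idea: fold a literal description of the
# potentially euphemistic term into the classifier input. The glossary,
# prompt template, and model are placeholders, not the submitted system.
from transformers import pipeline

# Hypothetical term -> literal description lookup.
GLOSSARY = {
    "passed away": "died",
    "let go": "dismissed from a job",
}

def build_prompt(sentence: str, term: str) -> str:
    # Append the literal meaning as explicit context for the model.
    description = GLOSSARY.get(term, term)
    return f"{sentence} Here, '{term}' literally means '{description}'."

# A generic off-the-shelf classifier stands in for the fine-tuned task model.
classifier = pipeline("text-classification",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

sentence = "After a long illness, she passed away last night."
print(classifier(build_prompt(sentence, "passed away")))
```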
How to best integrate linguistic and perceptual processing in multimodal tasks involving language and vision is an important open problem. In this work, we argue that the common practice of using language in a top-down manner, to direct visual attention over high-level visual features, may not be optimal. We hypothesize that using language to also condition the bottom-up processing from pixels to high-level features can benefit overall performance. To support our claim, we propose a U-Net-based model and conduct experiments on two language-vision dense-prediction tasks: referring expression segmentation and language-guided image colorization. We compare results where either only the top-down or also the bottom-up visual branch is conditioned on language. Our experiments show that using language to control bottom-up visual processing, in addition to top-down attention, yields better results on both tasks and achieves competitive performance. Our linguistic analysis suggests that bottom-up conditioning improves the segmentation of objects, especially when the input text refers to low-level visual concepts. Code is available at https://github.com/ilkerkesen/bvpr.
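A minimal PyTorch sketch of the central idea: conditioning a bottom-up (encoder) block on language in addition to the usual top-down use of text features. The FiLM-style scale-and-shift modulation shown here is an illustrative choice and may differ from the conditioning mechanism in the paper.

```python
# Minimal sketch: condition a bottom-up U-Net encoder block on a sentence
# embedding via a FiLM-style per-channel scale and shift (illustrative).
import torch
import torch.nn as nn

class LanguageConditionedEncoderBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, text_dim: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Predict per-channel scale and shift from the text embedding.
        self.film = nn.Linear(text_dim, 2 * out_ch)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        h = self.conv(x)
        gamma, beta = self.film(text).chunk(2, dim=-1)
        h = gamma[..., None, None] * h + beta[..., None, None]
        return self.pool(h)

# Usage: low-level image features modulated by a pooled text embedding.
block = LanguageConditionedEncoderBlock(in_ch=3, out_ch=32, text_dim=256)
out = block(torch.randn(1, 3, 128, 128), torch.randn(1, 256))
print(out.shape)  # torch.Size([1, 32, 64, 64])
```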
Two-stage robust optimization problems constitute one of the hardest classes of optimization problems. One solution approach to this class is K-adaptability, which simultaneously seeks the best partitioning of the uncertainty set of scenarios into K subsets and optimizes the decisions corresponding to each subset. In the general case, it is solved using the K-adaptability branch-and-bound algorithm, which requires exploring exponentially growing solution trees. To accelerate finding high-quality solutions in such trees, we propose a machine learning-based node selection strategy. In particular, we construct a feature engineering scheme based on general two-stage robust optimization insights, which allows us to train our machine learning tool on a database of resolved B&B trees and to apply it as-is to problems of different sizes and/or types. We show experimentally that our learned node selection strategy outperforms a vanilla, random node selection strategy when tested on problems of the same type as the training problems, even when the K-value or the problem size differs from those used in training.
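The sketch below illustrates the node-selection idea in isolation: open branch-and-bound nodes are scored by a learned model over hand-crafted features, and the best-scoring node is explored next, with random selection as the vanilla baseline. The feature set and scorer here are placeholders, not the paper's feature-engineering scheme.

```python
# Sketch of learned node selection in a branch-and-bound search.
import heapq
import random
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass(order=True)
class Node:
    priority: float
    state: Any = field(compare=False)

def features(state) -> list[float]:
    # Placeholder features; the paper derives its features from general
    # two-stage robust optimization insights (e.g. depth, bound information).
    return [state["depth"], state["lower_bound"], state["n_assigned"]]

def learned_selection(open_nodes: list[Node], score: Callable) -> Node:
    # Re-rank open nodes with the learned scorer and pop the best one.
    for node in open_nodes:
        node.priority = -score(features(node.state))
    heapq.heapify(open_nodes)
    return heapq.heappop(open_nodes)

def random_selection(open_nodes: list[Node]) -> Node:
    # Vanilla baseline the paper compares against.
    return open_nodes.pop(random.randrange(len(open_nodes)))

# Demo with a dummy scorer that prefers nodes with a lower bound value.
nodes = [Node(0.0, {"depth": d, "lower_bound": 10 - d, "n_assigned": d})
         for d in range(3)]
best = learned_selection(nodes, score=lambda f: -f[1])
print(best.state)
```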
Satisfaction measurement, which arises in every sector today, is a very important factor for many companies. In this study, we aim to reach the highest possible accuracy rates with various machine learning algorithms by using data from Yemek Sepeti and variations of that data. The accuracy value of each algorithm is computed together with the various natural language processing methods used. While computing these accuracy values, the parameters of the algorithms are tuned. The models trained in this study can be used on unlabeled data and can give companies an idea when measuring customer satisfaction. We observe that the three different natural language processing methods applied lead to an accuracy increase of roughly 5% in most of the developed models.
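A rough sketch of this kind of experiment, assuming a scikit-learn pipeline and a toy stand-in for the Yemek Sepeti reviews (the preprocessing step and model choice are assumptions): it compares cross-validated accuracy with and without a simple text-normalization step.

```python
# Toy comparison of classifier accuracy with and without text normalization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Stand-in data: (review, satisfied?) pairs; the real dataset is not public here.
texts = ["great food, fast delivery!", "order arrived cold and late",
         "tasty and well packed", "missing items, very disappointed"] * 10
labels = [1, 0, 1, 0] * 10

def preprocess(text: str) -> str:
    # One of several possible normalization steps (lowercasing only here).
    return text.lower()

for variant, docs in [("raw", texts),
                      ("normalized", [preprocess(t) for t in texts])]:
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    acc = cross_val_score(model, docs, labels, cv=5).mean()
    print(variant, round(acc, 3))
```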
We introduce a new rule-based optimization method for classification with constraints. The proposed method takes advantage of linear programming and column generation and is therefore scalable to large datasets. Moreover, the method returns a set of rules along with optimal weights indicating the importance of each rule for learning. By assigning cost coefficients to the rules and introducing additional constraints, we show that one can also account for the interpretability and fairness of the results. We test the performance of the proposed method on a collection of datasets and present two case studies to elaborate on its different aspects. Our results show that the proposed rule-based learning method achieves a good compromise between interpretability and fairness on the one hand, and accuracy on the other.
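For illustration, the sketch below solves a restricted master LP of the kind used in such weighted-rule schemes: given a fixed pool of candidate rules, it assigns nonnegative weights that trade off rule costs against margin violations. The exact formulation and variable names are assumptions, and the column-generation pricing step that adds new rules is omitted.

```python
# Hedged sketch of a restricted master LP for a weighted-rule classifier.
import numpy as np
from scipy.optimize import linprog

def fit_rule_weights(R, y, rule_costs, C=1.0):
    """R[i, j] in {-1, 0, +1}: vote of rule j on sample i; y in {-1, +1}."""
    n, m = R.shape
    # Decision vector: [w_1..w_m, xi_1..xi_n] (rule weights, slacks).
    c = np.concatenate([rule_costs, C * np.ones(n)])
    # Margin constraints y_i * (R w)_i + xi_i >= 1, written as A_ub x <= b_ub.
    A_ub = np.hstack([-(y[:, None] * R), -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (m + n))
    return res.x[:m]  # optimal rule weights

# Tiny example: two candidate rules, four samples; rule 0 fits the labels.
R = np.array([[1, -1], [1, 1], [-1, 1], [-1, -1]], dtype=float)
y = np.array([1.0, 1.0, -1.0, -1.0])
print(fit_rule_weights(R, y, rule_costs=np.array([0.1, 0.1])))
```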